110 research outputs found

    Monetary policy analysis in a small open economy using Bayesian cointegrated structural VARs

    Structural VARs have been extensively used in empirical macroeconomics during the last two decades, particularly in analyses of monetary policy. Existing Bayesian procedures for structural VARs are at best confined to a severely limited handling of cointegration restrictions. This paper extends the Bayesian analysis of structural VARs to cover cointegrated processes with an arbitrary number of cointegrating relations and general linear restrictions on the cointegration space. A reference prior distribution with an optional small open economy effect is proposed, and a Gibbs sampler is derived for a straightforward evaluation of the posterior distribution. The methods are used to analyze the effects of monetary policy in Sweden. JEL Classification: C11, C32, E52. Keywords: counterfactual experiments, impulse responses, monetary policy, structural vector autoregression.

    Physiological Gaussian Process Priors for the Hemodynamics in fMRI Analysis

    Background: Inference from fMRI data faces the challenge that the hemodynamic system that relates neural activity to the observed BOLD fMRI signal is unknown. New Method: We propose a new Bayesian model for task fMRI data with the following features: (i) joint estimation of brain activity and the underlying hemodynamics, (ii) the hemodynamics is modeled nonparametrically with a Gaussian process (GP) prior guided by physiological information and (iii) the predicted BOLD is not necessarily generated by a linear time-invariant (LTI) system. We place a GP prior directly on the predicted BOLD response, rather than on the hemodynamic response function as in previous literature. This allows us to incorporate physiological information via the GP prior mean in a flexible way, and simultaneously gives us the nonparametric flexibility of the GP. Results: Results on simulated data show that the proposed model is able to discriminate between active and non-active voxels even when the GP prior deviates from the true hemodynamics. Our model finds time-varying dynamics when applied to real fMRI data. Comparison with Existing Method(s): The proposed model is better at detecting activity in simulated data than standard models, without inflating the false positive rate. When applied to real fMRI data, our GP model in several cases finds brain activity where previously proposed LTI models do not. Conclusions: We have proposed a new non-linear model for the hemodynamics in task fMRI that is able to detect active voxels and gives the opportunity to ask new kinds of questions related to hemodynamics.
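The idea of a GP prior on the predicted BOLD response, with physiological information entering through the prior mean, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the double-gamma mean function, the squared-exponential kernel, and all parameter values below are assumptions standing in for the physiologically informed prior.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
t = np.linspace(0, 30, 61)  # seconds, 0.5 s grid

# A canonical double-gamma HRF-like prior mean (hypothetical parameter values):
# a gamma(6) bump minus a scaled gamma(16) undershoot.
def hrf_mean(t):
    return t**5 * np.exp(-t) / 120.0 - (t**15 * np.exp(-t) / factorial(15)) / 6.0

# Squared-exponential covariance: smooth nonparametric deviations from the mean,
# so the predicted BOLD need not follow the canonical shape exactly.
def se_kernel(t, ell=2.0, s2=0.01):
    d = t[:, None] - t[None, :]
    return s2 * np.exp(-0.5 * (d / ell) ** 2)

mu = hrf_mean(t)
K = se_kernel(t) + 1e-8 * np.eye(t.size)  # jitter for numerical stability

# Draws from the GP prior over the predicted BOLD response.
draws = rng.multivariate_normal(mu, K, size=5)
```

Each draw peaks near the canonical 5-second lag but is free to deviate, which is what lets the model capture non-LTI hemodynamics.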

    A Bayesian Heteroscedastic GLM with Application to fMRI Data with Motion Spikes

    We propose a voxel-wise general linear model with autoregressive noise and heteroscedastic noise innovations (GLMH) for analyzing functional magnetic resonance imaging (fMRI) data. The model is analyzed from a Bayesian perspective and has the benefit of automatically down-weighting time points close to motion spikes in a data-driven manner. We develop a highly efficient Markov Chain Monte Carlo (MCMC) algorithm that allows for Bayesian variable selection among the regressors to model both the mean (i.e., the design matrix) and variance. This makes it possible to include a broad range of explanatory variables in both the mean and variance (e.g., time trends, activation stimuli, head motion parameters and their temporal derivatives), and to compute the posterior probability of inclusion from the MCMC output. Variable selection is also applied to the lags in the autoregressive noise process, making it possible to infer the lag order from the data simultaneously with all other model parameters. We use both simulated data and real fMRI data from OpenfMRI to illustrate the importance of proper modeling of heteroscedasticity in fMRI data analysis. Our results show that the GLMH tends to detect more brain activity, compared to its homoscedastic counterpart, by allowing the variance to change over time depending on the degree of head motion.
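The down-weighting effect can be illustrated with a much simpler, non-Bayesian analogue: weighted least squares where time points with inflated variance receive small weights. This toy simulation (not the paper's model, which estimates the variance regression from the data via MCMC) assumes the spike variances are known, just to show the mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
t = np.arange(T)

# Simulated fMRI-like series (hypothetical): a box-car activation regressor
# plus noise whose variance jumps at two "motion spike" time points.
x = (t % 40 < 20).astype(float)           # activation regressor, true effect = 2
spikes = np.zeros(T)
spikes[[50, 120]] = 1.0
sigma = np.exp(2.0 * spikes)              # log-linear variance model
y = 1.0 + 2.0 * x + sigma * rng.normal(size=T)

X = np.column_stack([np.ones(T), x])

# OLS treats all time points equally; WLS with weights 1/sigma^2 automatically
# down-weights the spikes, mimicking the heteroscedastic model's behavior.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
w = 1.0 / sigma**2
Xw = X * w[:, None]
beta_wls = np.linalg.solve(X.T @ Xw, Xw.T @ y)  # (X'WX) beta = X'Wy
```

The spike time points get weight exp(-4) ≈ 0.02 versus 1 elsewhere, so they barely influence the WLS estimate.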

    Sparse Partially Collapsed MCMC for Parallel Inference in Topic Models

    Topic models, and more specifically the class of Latent Dirichlet Allocation (LDA), are widely used for probabilistic modeling of text. MCMC sampling from the posterior distribution is typically performed using a collapsed Gibbs sampler. We propose a parallel sparse partially collapsed Gibbs sampler and compare its speed and efficiency to state-of-the-art samplers for topic models on five well-known text corpora of differing sizes and properties. In particular, we propose and compare two different strategies for sampling the parameter block with latent topic indicators. The experiments show that the increase in statistical inefficiency from only partial collapsing is smaller than commonly assumed, and can be more than compensated by the speedup from parallelization and sparsity on larger corpora. We also prove that the partially collapsed samplers scale well with the size of the corpus. The proposed algorithm is fast, efficient, exact, and can be used in more modeling situations than the ordinary collapsed sampler. (Accepted for publication in the Journal of Computational and Graphical Statistics.)
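For context, the ordinary fully collapsed Gibbs sampler that the paper's partially collapsed variant builds on can be sketched in a few lines. This is a toy implementation of the standard baseline on a hypothetical four-word corpus, not the paper's parallel sparse sampler; corpus, K, alpha, and beta are all made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic corpus: each document is a list of word ids (hypothetical).
docs = [[0, 0, 1, 1, 0], [2, 3, 2, 3, 3], [0, 1, 2, 3]]
V, K, alpha, beta = 4, 2, 0.1, 0.1

# Initialize topic assignments and the count tables the sampler maintains.
z = [[int(rng.integers(K)) for _ in doc] for doc in docs]
ndk = np.zeros((len(docs), K))   # document-topic counts
nkw = np.zeros((K, V))           # topic-word counts
nk = np.zeros(K)                 # topic totals
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]
        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

for sweep in range(50):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            # Remove the current token, sample its topic from the collapsed
            # conditional p(z=k | rest) ∝ (ndk+alpha)(nkw+beta)/(nk+V*beta),
            # then add it back.
            k = z[d][i]
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = int(rng.choice(K, p=p / p.sum()))
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
```

Because every token update here touches the shared count tables, this sampler is inherently sequential; partially collapsing (keeping the topic-word distributions as explicit parameters) is what opens the door to the parallel sampling the paper studies.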

    Speeding Up MCMC by Delayed Acceptance and Data Subsampling

    The complexity of the Metropolis-Hastings (MH) algorithm arises from the requirement of a likelihood evaluation for the full data set in each iteration. Payne and Mallick (2015) propose to speed up the algorithm by a delayed acceptance approach where the acceptance decision proceeds in two stages. In the first stage, an estimate of the likelihood based on a random subsample determines if it is likely that the draw will be accepted and, if so, the second stage uses the full data likelihood to decide upon final acceptance. Evaluating the full data likelihood is thus avoided for draws that are unlikely to be accepted. We propose a more precise likelihood estimator which incorporates auxiliary information about the full data likelihood while only operating on a sparse set of the data. We prove that the resulting delayed acceptance MH is more efficient compared to that of Payne and Mallick (2015). The caveat of this approach is that the full data set needs to be evaluated in the second stage. We therefore propose to substitute this evaluation by an estimate and construct a state-dependent approximation thereof to use in the first stage. This results in an algorithm that (i) can use a smaller subsample m by leveraging recent advances in Pseudo-Marginal MH (PMMH) and (ii) is provably within O(m^{-2}) of the true posterior. (Accepted for publication in the Journal of Computational and Graphical Statistics.)
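The basic two-stage delayed acceptance scheme described above (the Payne and Mallick style starting point, not the paper's refined estimator) can be sketched on a toy normal-mean model. The data, plain subsample estimator, and all tuning values here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N(1, 1) observations; theta is the unknown mean (flat prior).
data = rng.normal(1.0, 1.0, size=10_000)

def loglik(theta, x):
    return -0.5 * np.sum((x - theta) ** 2)

def delayed_acceptance_mh(n_iter=2000, m=100, step=0.05):
    theta, draws = 0.0, []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()  # symmetric random-walk proposal
        # Stage 1: cheap screen using a random subsample of size m, scaled
        # up to estimate the full-data log-likelihood ratio.
        idx = rng.choice(data.size, size=m, replace=False)
        scale = data.size / m
        stage1 = scale * (loglik(prop, data[idx]) - loglik(theta, data[idx]))
        if np.log(rng.uniform()) < stage1:
            # Stage 2: full-data correction, only for promising proposals;
            # accepting with ratio (full ratio) / (stage-1 ratio) keeps the
            # chain exact.
            stage2 = (loglik(prop, data) - loglik(theta, data)) - stage1
            if np.log(rng.uniform()) < stage2:
                theta = prop
        draws.append(theta)
    return np.array(draws)

draws = delayed_acceptance_mh()
```

Proposals that the subsample screen rejects never pay for a full-data evaluation, which is where the savings come from; the paper's contribution is to make the stage-1 estimate much more precise and to replace the stage-2 full-data evaluation as well.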

    Speeding Up MCMC by Efficient Data Subsampling

    We propose Subsampling MCMC, a Markov Chain Monte Carlo (MCMC) framework where the likelihood function for n observations is estimated from a random subset of m observations. We introduce a highly efficient unbiased estimator of the log-likelihood based on control variates, such that the computing cost is much smaller than that of the full log-likelihood in standard MCMC. The likelihood estimate is bias-corrected and used in two dependent pseudo-marginal algorithms to sample from a perturbed posterior, for which we derive the asymptotic error with respect to n and m, respectively. We propose a practical estimator of the error and show that the error is negligible even for a very small m in our applications. We demonstrate that Subsampling MCMC is substantially more efficient than standard MCMC in terms of sampling efficiency for a given computational budget, and that it outperforms other subsampling methods for MCMC proposed in the literature.
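A control-variate log-likelihood estimator of the kind described can be sketched as follows: a second-order Taylor expansion of each observation's log-likelihood contribution around a central value serves as the control variate, its exact sum is available cheaply, and a small subsample corrects for the residuals. The normal model and all values below are illustrative assumptions, not the paper's applications; note that for this Gaussian toy model the quadratic expansion is exact, so the residuals vanish and the estimator has essentially zero variance.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.5, 1.0, size=100_000)
n = data.size

def l_all(theta, x):
    # Per-observation log-density contributions for N(theta, 1), constants dropped.
    return -0.5 * (x - theta) ** 2

# Control variates: second-order Taylor expansion of each contribution around
# a fixed central value theta_star, with everything precomputed once.
theta_star = data.mean()
l0 = l_all(theta_star, data)   # values at theta_star
g0 = data - theta_star         # first derivatives wrt theta
h0 = -np.ones(n)               # second derivatives
S0, S1, S2 = l0.sum(), g0.sum(), h0.sum()  # O(1) sums, reused every iteration

def loglik_estimate(theta, m=500):
    d = theta - theta_star
    q_sum = S0 + S1 * d + 0.5 * S2 * d**2       # exact sum of control variates
    idx = rng.integers(0, n, size=m)            # subsample with replacement
    q_sub = l0[idx] + g0[idx] * d + 0.5 * h0[idx] * d**2
    resid = l_all(theta, data[idx]) - q_sub     # residuals l_i - q_i
    return q_sum + n * resid.mean()             # unbiased for the log-likelihood
```

Only the m residual terms are touched per evaluation; for non-quadratic likelihoods the residuals are small but nonzero, and the control variates shrink the estimator's variance accordingly.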

    Bayesian approaches to cointegration

    The degree of empirical support of a priori plausible structures on the cointegration vectors has a central role in the analysis of cointegration. Villani (2000) and Strachan and van Dijk (2003) have recently proposed finite sample Bayesian procedures to calculate the posterior probability of restrictions on the cointegration space, using the existence of a uniform prior distribution on the cointegration space as the key ingredient. The current paper extends this approach to the empirically important case with different restrictions on the individual cointegration vectors. Prior distributions are proposed and posterior simulation algorithms are developed. Consumers' expenditure data for the US is used to illustrate the robustness of the results to variations in the prior. A simulation study shows that the Bayesian approach performs remarkably well in comparison to more established methods for testing restrictions on the cointegration vectors.